Results 1 - 20 of 248
6.
JAMA; 331(1): 65-69, 2024 Jan 2.
Article in English | MEDLINE | ID: mdl-38032660

ABSTRACT

Importance: Since the introduction of ChatGPT in late 2022, generative artificial intelligence (genAI) has elicited enormous enthusiasm and serious concerns.

Observations: History has shown that general-purpose technologies often fail to deliver their promised benefits for many years ("the productivity paradox of information technology"). Health care has several attributes that make the successful deployment of new technologies even more difficult than in other industries; these have challenged prior efforts to implement AI and electronic health records. However, genAI has unique properties that may shorten the usual lag between implementation and gains in productivity and/or quality in health care. Moreover, the health care ecosystem has evolved to become more receptive to genAI, and many health care organizations are poised to implement the complementary innovations in culture, leadership, workforce, and workflow often needed for digital innovations to flourish.

Conclusions and Relevance: The ability of genAI to improve rapidly, and the capacity of organizations to implement the complementary innovations that allow IT tools to reach their potential, are more advanced than in the past; genAI is therefore positioned to deliver meaningful improvements in health care more quickly than previous technologies did.


Subjects
Artificial Intelligence, Delivery of Health Care, Artificial Intelligence/standards, Artificial Intelligence/trends, Delivery of Health Care/methods, Delivery of Health Care/trends, Diffusion of Innovation
7.
JAMA; 331(3): 245-249, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38117493

ABSTRACT

Importance: Given the rigorous development and evaluation standards required of artificial intelligence (AI) models used in health care, nationally accepted procedures that provide assurance that the use of AI is fair, appropriate, valid, effective, and safe are urgently needed.

Observations: Although several efforts are under way to develop standards and best practices for evaluating AI, there is a gap between having such guidance and applying it to both existing and newly developed AI models. At present, there is no publicly available, nationwide mechanism that enables objective evaluation and ongoing assessment of the consequences of using health AI models in clinical care settings.

Conclusion and Relevance: The need for a public-private partnership to support a nationwide network of health AI assurance labs is outlined here. In this network, community best practices could be applied to test health AI models, producing performance reports that can be widely shared to manage the lifecycle of AI models over time and across the populations and sites where they are deployed.


Subjects
Artificial Intelligence, Delivery of Health Care, Laboratories, Quality Assurance, Health Care, Quality of Health Care, Artificial Intelligence/standards, Health Facilities/standards, Laboratories/standards, Public-Private Sector Partnerships, Quality Assurance, Health Care/standards, Delivery of Health Care/standards, Quality of Health Care/standards, United States
9.
Rev. derecho genoma hum; (59): 129-148, Jul-Dec 2023.
Article in Spanish | IBECS | ID: ibc-232451

ABSTRACT

The issue of bias presents a significant challenge for AI systems. These biases not only arise from existing data but are also introduced by the individuals using the systems, who are inherently biased, like all humans. This is a concerning reality because algorithms can significantly influence a doctor's diagnosis. Recent analyses indicate that this influence can persist even once doctors no longer receive guidance from the system, implying not only an inability to perceive bias but also a propensity to propagate it. The potential consequence is a self-perpetuating cycle capable of inflicting significant harm on individuals, especially when artificial intelligence (AI) systems are employed in sensitive contexts such as health care. In response, legal frameworks have devised governance mechanisms that, at first glance, seem sufficient, especially in the European Union: the recently adopted regulations on data, and those now targeting AI, serve as prime illustrations of how adequate supervision of AI systems might be achieved. In practice, however, many of these mechanisms are likely to prove ineffective at identifying biases that arise after such systems have been placed on the market, a juncture at which multiple agents, to whom responsibility has been predominantly delegated, may be involved. Hence, it is imperative to insist that AI developers implement strict measures to regulate the biases inherent in their systems: if the developers cannot detect these biases, it will be far harder for anyone else to do so, at least until their presence becomes conspicuous; otherwise, the long-term repercussions will be borne collectively.
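A toy simulation, with entirely hypothetical numbers not drawn from the article, can illustrate the self-perpetuating cycle described above: when clinicians partly defer to a model that under-diagnoses one group, the decisions recorded for any future retraining already carry the model's bias.

```python
import random

random.seed(0)

N = 10_000
true_rate = 0.30    # hypothetical true prevalence in the disadvantaged group
model_rate = 0.24   # the model under-diagnoses that group
follow_prob = 0.70  # chance a clinician defers to the model's suggestion

decisions = []
for _ in range(N):
    model_says = random.random() < model_rate
    own_judgement = random.random() < true_rate
    decisions.append(model_says if random.random() < follow_prob else own_judgement)

observed = sum(decisions) / N
print(f"diagnosis rate recorded under model influence: {observed:.3f}")
# Expected value: 0.70 * 0.24 + 0.30 * 0.30 = 0.258 rather than 0.300 -- the
# labels available for retraining are already biased, closing the loop.
```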


Subjects
Humans, Artificial Intelligence/ethics, Artificial Intelligence/legislation & jurisprudence, Artificial Intelligence/standards, Bias
14.
Nature; 620(7972): 47-60, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37532811

ABSTRACT

Artificial intelligence (AI) is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Here we examine breakthroughs over the past decade that include self-supervised learning, which allows models to be trained on vast amounts of unlabelled data, and geometric deep learning, which leverages knowledge about the structure of scientific data to enhance model accuracy and efficiency. Generative AI methods can create designs, such as small-molecule drugs and proteins, by analysing diverse data modalities, including images and sequences. We discuss how these methods can help scientists throughout the scientific process and the central issues that remain despite such advances. Both developers and users of AI tools need a better understanding of when such approaches require improvement, and challenges posed by poor data quality and stewardship remain. These issues cut across scientific disciplines and require the development of foundational algorithmic approaches that can contribute to scientific understanding, or acquire it autonomously, making them critical areas of focus for AI innovation.
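The self-supervised learning the authors highlight can be made concrete with a minimal sketch, an illustration assumed here rather than an example from the paper: a toy network learns to reconstruct masked-out entries of unlabelled vectors, so the data supply their own training signal.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in unlabelled dataset: 256 samples, 32 features each.
x = torch.randn(256, 32)

# Tiny autoencoder; no labels anywhere -- reconstruction is the supervision.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 32))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    mask = (torch.rand_like(x) > 0.25).float()  # hide roughly 25% of entries
    pred = model(x * mask)
    # Penalize errors only on the hidden entries: the self-supervised task.
    loss = ((pred - x) ** 2 * (1 - mask)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final masked-reconstruction loss: {loss.item():.4f}")
```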


Subjects
Artificial Intelligence, Research Design, Artificial Intelligence/standards, Artificial Intelligence/trends, Datasets as Topic, Deep Learning, Research Design/standards, Research Design/trends, Unsupervised Machine Learning
16.
JAMA; 330(1): 78-80, 2023 Jul 3.
Article in English | MEDLINE | ID: mdl-37318797

ABSTRACT

This study assesses the diagnostic accuracy of the Generative Pre-trained Transformer 4 (GPT-4) artificial intelligence (AI) model in a series of challenging cases.
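How such an assessment might be wired up is sketched below; the placeholder case, prompt, and crude string-match scoring are all assumptions for illustration, not the study's protocol.

```python
# Hypothetical sketch of a GPT-4 diagnostic-accuracy check.
# Requires `pip install openai` and the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Placeholder case; a real evaluation would substitute the study's cases.
cases = [
    {"summary": "65-year-old with fever, a new murmur, and splinter haemorrhages",
     "reference": "infective endocarditis"},
]

correct = 0
for case in cases:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a clinician. Give a ranked differential diagnosis."},
            {"role": "user", "content": case["summary"]},
        ],
    )
    answer = resp.choices[0].message.content.lower()
    # Crude scoring: does the reference diagnosis appear anywhere in the reply?
    correct += case["reference"] in answer

print(f"{correct}/{len(cases)} reference diagnoses mentioned")
```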


Subjects
Artificial Intelligence, Diagnosis, Computer-Assisted, Artificial Intelligence/standards, Reproducibility of Results, Computer Simulation/standards, Diagnosis, Computer-Assisted/standards
20.
Nature; 616(7957): 520-524, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37020027

ABSTRACT

Artificial intelligence (AI) has been developed for echocardiography [1-3], although it has not yet been tested with blinding and randomization. Here we designed a blinded, randomized non-inferiority clinical trial (ClinicalTrials.gov ID: NCT05140642; no outside funding) of AI versus sonographer initial assessment of left ventricular ejection fraction (LVEF) to evaluate the impact of AI in the interpretation workflow. The primary end point was the change in the LVEF between initial AI or sonographer assessment and final cardiologist assessment, evaluated by the proportion of studies with substantial change (more than 5% change). From 3,769 echocardiographic studies screened, 274 studies were excluded owing to poor image quality. The proportion of studies substantially changed was 16.8% in the AI group and 27.2% in the sonographer group (difference of -10.4%, 95% confidence interval: -13.2% to -7.7%, P < 0.001 for non-inferiority, P < 0.001 for superiority). The mean absolute difference between final cardiologist assessment and independent previous cardiologist assessment was 6.29% in the AI group and 7.23% in the sonographer group (difference of -0.96%, 95% confidence interval: -1.34% to -0.54%, P < 0.001 for superiority). The AI-guided workflow saved time for both sonographers and cardiologists, and cardiologists were not able to distinguish between the initial assessments by AI versus the sonographer (blinding index of 0.088). For patients undergoing echocardiographic quantification of cardiac function, initial assessment of LVEF by AI was non-inferior to assessment by sonographers.
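The reported primary endpoint can be sanity-checked from the figures above with a normal-approximation interval. A minimal sketch; the equal per-arm sizes are an assumption, since the abstract gives only the 3,495 analysable studies in total.

```python
import math

# Reported proportions of studies with a substantial (>5%) LVEF change.
p_ai, p_sono = 0.168, 0.272
# Assumed even split of the 3,495 analysable studies (3,769 screened - 274 excluded).
n_ai = n_sono = (3769 - 274) // 2

diff = p_ai - p_sono  # -0.104, matching the reported -10.4%
se = math.sqrt(p_ai * (1 - p_ai) / n_ai + p_sono * (1 - p_sono) / n_sono)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
# -> difference -10.4%, 95% CI (-13.1%, -7.7%): close to the reported
#    (-13.2%, -7.7%); the small gap reflects the assumed equal arm sizes.
```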


Subjects
Artificial Intelligence, Cardiologists, Echocardiography, Heart Function Tests, Humans, Artificial Intelligence/standards, Echocardiography/methods, Echocardiography/standards, Stroke Volume, Ventricular Function, Left, Single-Blind Method, Workflow, Reproducibility of Results, Heart Function Tests/methods, Heart Function Tests/standards